
Conversation

@frisitano frisitano (Collaborator) commented Jul 2, 2025

closes: #182
closes scroll-tech/reth#243

@frisitano frisitano marked this pull request as ready for review July 16, 2025 14:10
@frisitano frisitano requested a review from Copilot July 16, 2025 14:13

@Copilot Copilot AI left a comment


Pull Request Overview

This PR implements a new ChainOrchestrator to replace the previous indexer, integrates it throughout the node, watcher, network, and engine, and updates tests and database migrations accordingly.

  • Introduces ChainOrchestrator in place of Indexer, refactors RollupNodeManager to consume orchestrator events instead of indexer events.
  • Adds Synced notifications to L1Watcher and updates engine driver to handle optimistic sync via ChainOrchestrator.
  • Refactors configuration (ScrollRollupNodeConfig), network manager, and database migrations; adjusts tests to cover the new orchestrator flows.
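
A minimal sketch of the event flow described above, with hypothetical variant and handler names (the actual ChainOrchestrator / RollupNodeManager APIs in this PR may differ):

```rust
// Illustrative only: how a manager loop might react to L1 watcher
// notifications and chain-orchestrator events. Variant and method names
// below are assumptions, not the PR's actual API.
enum L1Notification {
    /// Emitted once the L1 watcher has caught up to the L1 head.
    Synced,
}

enum ChainOrchestratorEvent {
    /// A validated chain of blocks that should be imported by the engine.
    ChainImport { tip: u64 },
    /// The node is far behind and should optimistically sync to `target`.
    OptimisticSync { target: u64 },
}

struct RollupNodeManager {
    l1_synced: bool,
}

impl RollupNodeManager {
    fn on_l1_notification(&mut self, notification: L1Notification) {
        match notification {
            L1Notification::Synced => self.l1_synced = true,
        }
    }

    fn on_orchestrator_event(&self, event: ChainOrchestratorEvent) {
        match event {
            ChainOrchestratorEvent::ChainImport { tip } => {
                // Forward the chain to the engine driver for import.
                println!("import chain up to block {tip}");
            }
            ChainOrchestratorEvent::OptimisticSync { target } => {
                // Ask the engine driver to optimistically sync to the target.
                println!("optimistic sync to block {target}");
            }
        }
    }
}
```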

Reviewed Changes

Copilot reviewed 40 out of 41 changed files in this pull request and generated 1 comment.

Summary per file:

crates/indexer/src/lib.rs: Rename Indexer to ChainOrchestrator and overhaul API flows
crates/manager/src/manager/mod.rs: Replace indexer usage with ChainOrchestrator in node manager
crates/node/src/args.rs: Instantiate ChainOrchestrator in ScrollRollupNodeConfig
crates/watcher/src/lib.rs: Add Synced variant and is_synced flag to L1Watcher
crates/scroll-wire/src/protocol/proto.rs: Adjust doc comment for NewBlock::new
crates/node/tests/e2e.rs: Add/revise reorg and sync end-to-end tests
crates/watcher/tests/reorg.rs: Update tests to skip Synced notifications
crates/database/db/src/operations.rs: Extend DB ops with L1MessageStart and block-and-batch queries
crates/database/migration/src/migration_info.rs: Add genesis_hash() to migrations and insert genesis blocks
crates/network/src/manager.rs: Wire up eth-wire listener and dispatch chain-orchestrator events
crates/engine/src/driver.rs: Support ChainImport and OptimisticSync futures in engine driver

Comments suppressed due to low confidence (2)

crates/scroll-wire/src/protocol/proto.rs:33

  • The doc comment uses "blocks" (plural) but the constructor takes a single block; change to "block" for accuracy.
    /// Returns a [`NewBlock`] instance with the provided signature and blocks.

crates/node/tests/e2e.rs:95

  • The follower_can_reorg test has no assertions; either add meaningful checks or remove the empty test to maintain coverage.
async fn follower_can_reorg() -> eyre::Result<()> {

@frisitano frisitano requested a review from greged93 July 17, 2025 17:14
@greged93 greged93 mentioned this pull request Jul 22, 2025
@greged93 greged93 (Collaborator) left a comment


Reviewing the l2geth code, I noticed we might be missing checks on L1 messages.

greged93 previously approved these changes Aug 11, 2025

@greged93 greged93 (Collaborator) left a comment


Just need to resolve the pending comments and open one issue related to L1 watcher sync.

Comment on lines -215 to +233
-            if self.is_synced() {
+            if self.is_synced {
                 tokio::time::sleep(SLOW_SYNC_INTERVAL).await;
             } else if self.current_block_number == self.l1_state.head {
                 // if we have synced to the head of the L1, notify the channel and set the
                 // `is_synced` flag.
                 if let Err(L1WatcherError::SendError(_)) = self.notify(L1Notification::Synced).await
                 {
                     tracing::warn!(target: "scroll::watcher", "L1 watcher channel closed, stopping the watcher");
                     break;
                 }
                 self.is_synced = true;
Contributor

Agree, let's address this in another PR.

let parent_hash =
    optimistic_headers.first().expect("chain can not be empty").parent_hash;
let header = network_client
    .get_header(BlockHashOrNumber::Hash(parent_hash))
Contributor

This will request up to 2000 headers (by default == CHAIN_BUFFER_SIZE) one by one from other nodes. Is this efficient? Shouldn't we use GetBlockHeaders for this? Or why request the headers at all? We could also wait for the EN to finish the sync and then load from our EN?

@frisitano frisitano (Collaborator Author) Aug 20, 2025

Is this efficient? Shouldn't we use GetBlockHeaders for this?

Yes, a more efficient means would be to use GetBlockHeaders.

We could also wait for the EN to finish the sync and then load from our EN?

I think this is the best solution.

I will add both of these points to the refactoring issue that I create shortly.
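
For reference, a minimal sketch of the batched alternative discussed here, assuming a hypothetical ranged-request client (not the actual network client API used in this PR):

```rust
// Illustrative only: fetch the missing ancestors with a single ranged request
// (in the spirit of eth/66 GetBlockHeaders) instead of walking parent hashes
// one by one. `RangedHeaderClient` and `get_headers_descending` are assumed
// names, not the rollup-node API.
use alloy_primitives::B256;

pub struct Header {
    pub hash: B256,
    pub parent_hash: B256,
    pub number: u64,
}

#[async_trait::async_trait]
pub trait RangedHeaderClient {
    /// Returns up to `limit` headers, starting at `start` and walking towards genesis.
    async fn get_headers_descending(&self, start: B256, limit: u64) -> eyre::Result<Vec<Header>>;
}

/// One round trip instead of `limit` individual `get_header` calls.
pub async fn fill_ancestors<C: RangedHeaderClient>(
    client: &C,
    oldest_known_parent: B256,
    limit: u64,
) -> eyre::Result<Vec<Header>> {
    client.get_headers_descending(oldest_known_parent, limit).await
}
```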

// Check if we already have this block in memory.
if received_block.number <= max_block_number &&
    received_block.number >= min_block_number &&
    current_chain_headers.iter().any(|h| h == &received_block.header)
Contributor

How expensive is this? Do we expect this to become a bottleneck with high throughput?

@frisitano frisitano (Collaborator Author) Aug 20, 2025

I think we can make this much more efficient by leveraging the EN: we can completely eliminate the in-memory buffer. I will write this up in the issue.
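
To illustrate the cost concern (separate from the proposal above to drop the buffer and rely on the EN), the linear scan could also be replaced by a hash-indexed lookup; a sketch with assumed types:

```rust
// Illustrative only: index the buffered headers by hash so the membership
// check is O(1) instead of scanning up to CHAIN_BUFFER_SIZE headers for
// every received block. Types and field names here are assumptions.
use std::collections::HashMap;

use alloy_primitives::B256;

struct Header {
    hash: B256,
    number: u64,
}

struct ChainBuffer {
    by_hash: HashMap<B256, Header>,
    min_block_number: u64,
    max_block_number: u64,
}

impl ChainBuffer {
    /// Returns true if we already hold this header in memory.
    fn contains(&self, header: &Header) -> bool {
        header.number >= self.min_block_number
            && header.number <= self.max_block_number
            && self.by_hash.contains_key(&header.hash)
    }
}
```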

return Err(ChainOrchestratorError::L2SafeBlockReorgDetected);
}

tracing::trace!(target: "scroll::chain_orchestrator", number = ?(received_chain_headers.last().expect("chain can not be empty").number - 1), "fetching block");
Contributor

Can't we store received_chain_headers.last() in a variable somewhere to avoid calling the expect dozens of times?
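
A sketch of the suggested cleanup, reusing the identifiers from the snippet (the surrounding types are stand-ins):

```rust
// Illustrative only: bind the oldest received header once instead of
// repeating `.last().expect(...)` on every use. `Header` is a stand-in type.
struct Header {
    number: u64,
}

fn trace_next_fetch(received_chain_headers: &[Header]) {
    let oldest_received =
        received_chain_headers.last().expect("chain can not be empty");
    tracing::trace!(
        target: "scroll::chain_orchestrator",
        number = ?(oldest_received.number - 1),
        "fetching block"
    );
}
```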

// If the received fork is older than the current chain, we return an event
// indicating that we have received an old fork.
if (pos < current_chain_headers.len() - 1) &&
    current_chain_headers.get(pos + 1).expect("chain can not be empty").timestamp >
Contributor

This seems quite important: we decide whether to react to a reorg based on the timestamp of a block. What if the sequencer's time is faulty?

Collaborator Author

Agree that this is important. What do you think the rule should be?

@jonastheis jonastheis (Contributor) Aug 25, 2025

Not sure what it should be. For now it's okay like this, but we should discuss it. Maybe best to capture it in an issue so we can revisit if necessary and don't forget.

};
self.inner_network_handle().eth_wire_announce_block(eth_wire_new_block, hash);
}
let eth_wire_new_block = {
Contributor

Why do we actually need to compute the total difficulty here? Isn't it part of the block header already (eth_wire_block.header.difficulty), or is this a different difficulty?

Collaborator Author

I believe that's the block difficulty and not the total difficulty?

Contributor

Ah yeah, you're right; I confused it with totalDifficulty, which is also returned from the API. It's a computed value, and our computation is simplified since we have the fixed offset.

cast block --rpc-url https://rpc.scroll.io latest


baseFeePerGas        120008
difficulty           1
extraData            0x
gasLimit             20000000
gasUsed              266502
hash                 0xa5c5ceda288f430b5ac8ccf27c7c8646f54aa299a0fde0689b80d365bfc65286
logsBloom            0x0000008000000000000000000000000000020000000000000000000000000000040000000000000000000001000000000000600000002000000200000000000000000000000000080000000800000000000000004000000420000000000200000000002000100000000000000000000002000000004000000000001000080000000000000000000000000000000000000000006000020040000000000000000000000000000000000000000400000000002000000000000000008000000000200000000200000000000010000000000000040000000000020000000000000000000000000000000200000000000000000000000000008000000000000a000000
miner                0x0000000000000000000000000000000000000000
mixHash              0x0000000000000000000000000000000000000000000000000000000000000000
nonce                0x0000000000000000
number               20063317
parentHash           0x0f8160698ab23ebf35154115bace014f19bf8658ed8c84bbfe1dcd0ac3fdd823
parentBeaconRoot
transactionsRoot     0x66043dba38623338f7e845a00709339ca82dd715e46117c73a1ef957d1125548
receiptsRoot         0x4285fb1b3cf957fe86f734635a3fa26901d24c9894930957282727cee3c9f790
sha3Uncles           0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347
size                 766
stateRoot            0x6ece91f3f6c025928c08cf1a6abfae4aec074694b4125dc6703504cb0e40a3d7
timestamp            1756111046 (Mon, 25 Aug 2025 08:37:26 +0000)
withdrawalsRoot
totalDifficulty      34970277
blobGasUsed
excessBlobGas
requestsHash
transactions:        [
	0xb2f7bf405118615de93d0834cb0cecd6dc7516d708fad959c9ba39feaf5a1d88
]
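
A hedged sketch of the simplification mentioned above: with a constant per-block difficulty of 1, the total difficulty grows linearly with the block number, so it can be computed as the block number plus a fixed offset (the constant below is a placeholder, not the node's actual value).

```rust
// Illustrative only: derive the eth-wire total difficulty from the block
// number plus a fixed offset rather than summing difficulties per block.
// `TD_OFFSET` is a placeholder, not the node's actual constant.
const TD_OFFSET: u128 = 0;

fn total_difficulty(block_number: u64) -> u128 {
    TD_OFFSET + u128::from(block_number)
}
```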

jonastheis previously approved these changes Aug 25, 2025
@frisitano frisitano merged commit aea3c93 into main Aug 25, 2025
14 of 16 checks passed
@frisitano frisitano deleted the feat/chain-orchestrator branch August 25, 2025 11:50
@frisitano (Collaborator Author)

Tests are passing fine locally; I suspect there is some form of contention in CI. Will merge and fix later.


Successfully merging this pull request may close these issues.

[Indexer] Refactor Indexer to ChainOrchestrator
L1 message validation